online content
EU opens investigation into Google's use of online content for AI models
Google runs the Gemini AI model and is owned by Alphabet. Tue 9 Dec 2025 05.06 EST. First published on Tue 9 Dec 2025 03.48 EST. The EU has opened an investigation to assess whether Google is breaching European competition rules in its use of online content from publishers and YouTube creators for artificial intelligence. The European Commission said on Tuesday it will examine whether the US tech company, which runs the Gemini AI model and is owned by Alphabet, is putting rival AI owners at a "disadvantage". "The investigation will notably examine whether Google is distorting competition by imposing unfair terms and conditions on publishers and content creators, or by granting itself privileged access to such content, thereby placing developers of rival AI models at a disadvantage," the commission said.
- North America > United States (0.71)
- Oceania > Australia (0.06)
- Asia (0.05)
- (3 more...)
Can Large Language Models be Effective Online Opinion Miners?
Heo, Ryang, Seo, Yongsik, Lee, Junseong, Lee, Dongha
The surge of user-generated online content presents a wealth of insights into customer preferences and market trends. However, the highly diverse, complex, and context-rich nature of such content poses significant challenges to traditional opinion mining approaches. To address this, we introduce the Online Opinion Mining Benchmark (OOMB), a novel dataset and evaluation protocol designed to assess the ability of large language models (LLMs) to mine opinions effectively from diverse and intricate online environments. OOMB provides extensive (entity, feature, opinion) tuple annotations and a comprehensive opinion-centric summary that highlights key opinion topics within each piece of content, thereby enabling the evaluation of both the extractive and abstractive capabilities of models. Through our proposed benchmark, we conduct a comprehensive analysis of which aspects remain challenging and where LLMs exhibit adaptability, to explore whether they can effectively serve as opinion miners in realistic online scenarios. This study lays the foundation for LLM-based opinion mining and discusses directions for future research in this field.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Europe > Austria > Vienna (0.14)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- (16 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (0.67)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Automobiles & Trucks > Manufacturer (1.00)
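The OOMB abstract above describes (entity, feature, opinion) tuple annotations and an extractive evaluation. A minimal sketch of what such tuples and a tuple-matching F1 score might look like follows; the class name, example post, and annotations are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes tuples hashable, so sets work below
class OpinionTuple:
    entity: str   # what is being discussed, e.g. a product
    feature: str  # the aspect of the entity being evaluated
    opinion: str  # the expressed judgement about that aspect

# Hypothetical annotations for one review-style post (not from OOMB).
post = "The camera on this phone is superb, but the battery drains fast."
gold = [
    OpinionTuple(entity="phone", feature="camera", opinion="superb"),
    OpinionTuple(entity="phone", feature="battery", opinion="drains fast"),
]

def tuple_f1(predicted, gold):
    """Exact-match F1 between predicted and gold opinion tuples."""
    tp = len(set(predicted) & set(gold))
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

A model that recovers only the camera tuple would score precision 1.0 and recall 0.5, giving an F1 of 2/3 under this exact-match scheme.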
New research could block AI models learning from your online content
"Noise" protection can be added to content before it's uploaded online. A new technique developed by Australian researchers could stop unauthorised artificial intelligence (AI) systems learning from photos, artwork and other image-based content. Developed by CSIRO, Australia's national science agency, in partnership with the Cyber Security Cooperative Research Centre (CSCRC) and the University of Chicago, the method subtly alters content to make it unreadable to AI models while remaining unchanged to the human eye. This could help artists, organisations and social media users protect their work and personal data from being used to train AI systems or create deepfakes. For example, a social media user could automatically apply a protective layer to their photos before posting, preventing AI systems from learning facial features for deepfake creation.
- Oceania > Australia (0.27)
- North America > United States > Illinois > Cook County > Chicago (0.27)
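The CSIRO article above describes subtly altering content so it is unreadable to AI models while looking unchanged to people. The snippet gives no technical detail, so the sketch below shows only the general idea of a small, bounded per-pixel perturbation on an 8-bit image; the real technique is adversarial rather than random, and the function name and parameters are assumptions for illustration.

```python
import random

def protect_image(pixels, epsilon=2, seed=0):
    """Return a copy of an 8-bit grayscale image with bounded noise added.

    Illustrates bounded, near-invisible perturbation only; the actual
    CSIRO method is adversarial (targeted at model features), not random,
    and its details are not given in the article.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    protected = []
    for row in pixels:
        new_row = []
        for value in row:
            perturbed = value + rng.randint(-epsilon, epsilon)
            new_row.append(max(0, min(255, perturbed)))  # clamp to [0, 255]
        protected.append(new_row)
    return protected
```

The key property such schemes aim for is that every pixel moves by at most epsilon, so the change is imperceptible to a human viewer while (in the adversarial case) disrupting the features a model would learn.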
Why is X suing the Indian government as Musk woos Modi?
When Elon Musk met Narendra Modi in Washington DC in February, the SpaceX and Tesla chief presented India's prime minister with a gift and introduced him to his family. Modi described the meeting as "very good". Modi was in the United States to see President Donald Trump. In Modi's meeting with Musk, the two talked about collaborating in the fields of artificial intelligence (AI), space exploration, innovation and sustainable development, according to India's Ministry of External Affairs. But almost a month later, Musk's social media platform X has filed a lawsuit against the Indian government, alleging that New Delhi is unlawfully censoring content online. The lawsuit comes as Musk edges closer to launching both Starlink and Tesla in India.
- Asia > India > NCT > New Delhi (0.26)
- North America > United States > District of Columbia > Washington (0.26)
- Asia > India > Karnataka (0.06)
- Law (1.00)
- Government > Regional Government > Asia Government > India Government (1.00)
Google fined €250m in France for breaching intellectual property rules
Google has been fined €250m (£213m) by French regulators for breaching an agreement over paying media companies for reproducing their content online. France's competition watchdog said on Wednesday that it was fining the US tech company for breaches linked to intellectual property rules related to news media publishers. The regulator also cited concerns about Google's AI service. The competition authority said Google's AI-powered chatbot Bard – since rebranded as Gemini – was trained on content from publishers and news agencies without notifying them. The watchdog said in a statement that the fine was for "failing to respect commitments made in 2022" and accused Google of not negotiating in "good faith" with news publishers on how much to compensate them for use of their content.
- Media > News (1.00)
- Government > Regional Government > Europe Government (0.54)
TechScape: Are social media giants silencing online content?
As the ongoing conflict between Israel and Hamas and its devastating effects play out in real time on social media, users are continuing to criticise tech firms for what they say is unfair content censorship – throwing into sharp focus longstanding concerns about the opaque algorithms that shape our online worlds. From the early days of the conflict, social media users have expressed outrage at allegedly uneven censorship of pro-Palestinian content on platforms like Instagram and Facebook. Meta has denied intentionally suppressing the content, saying that with more posts going up about the conflict, "content that doesn't violate our policies may be removed in error". But a third-party investigation (commissioned by Meta last year and conducted by the independent consultancy Business for Social Responsibility) had previously determined Meta had violated Palestinian human rights by censoring content related to Israel's attacks on Gaza in 2021, and incidents in recent weeks have revealed further issues with Meta's algorithmic moderation. Instagram's automated translation feature mistakenly added the word "terrorist" to Palestinian profiles, and WhatsApp, also owned by Meta, created auto-generated illustrations of gun-wielding children when prompted with the word "Palestine".
- Asia > Middle East > Israel (0.83)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.62)
- North America > United States > California (0.05)
- Europe > France (0.05)
- Law > Civil Rights & Constitutional Law (1.00)
- Information Technology (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.51)
Detection of Offensive and Threatening Online Content in a Low Resource Language
Adam, Fatima Muhammad, Zandam, Abubakar Yakubu, Inuwa-Dutse, Isa
Hausa is a major Chadic language, spoken by over 100 million people in Africa. However, from a computational linguistic perspective, it is considered a low-resource language, with limited resources to support Natural Language Processing (NLP) tasks. Online platforms often facilitate social interactions that can lead to the use of offensive and threatening language, which can go undetected due to the lack of detection systems designed for Hausa. This study aimed to address this issue by (1) conducting two user studies (n=308) to investigate cyberbullying-related issues, (2) collecting and annotating the first set of offensive and threatening datasets to support relevant downstream tasks in Hausa, (3) developing a detection system to flag offensive and threatening content, and (4) evaluating the detection system and the efficacy of the Google-based translation engine in detecting offensive and threatening terms in Hausa. We found that offensive and threatening content is quite common, particularly when discussing religion and politics. Our detection system was able to detect more than 70% of offensive and threatening content, although many of these were mistranslated by Google's translation engine. We attribute this to the subtle relationship between offensive and threatening content and idiomatic expressions in the Hausa language. We recommend that diverse stakeholders participate in understanding local conventions and demographics in order to develop a more effective detection system. These insights are essential for implementing targeted moderation strategies to create a safe and inclusive online environment.
- North America > United States > Texas > Harris County > Houston (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Africa > Nigeria > Jigawa State > Dutse (0.05)
- (5 more...)
- Research Report (1.00)
- Questionnaire & Opinion Survey (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (0.94)
- (2 more...)
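The Hausa study above describes a system that flags offensive and threatening content as two separate categories. As a rough illustration of that flagging interface only, here is a toy lexicon-based baseline; the study's actual detector is machine-learning based and operates on Hausa text, and the terms below are invented English stand-ins, not items from the paper's dataset.

```python
# Invented stand-in lexicons; the real study annotated Hausa-language data.
OFFENSIVE_TERMS = {"idiot", "stupid"}
THREATENING_TERMS = {"hurt you", "watch your back"}

def flag_content(text):
    """Return the set of labels for a post: offensive, threatening, or clean."""
    lowered = text.lower()
    labels = set()
    if any(term in lowered for term in OFFENSIVE_TERMS):
        labels.add("offensive")
    if any(term in lowered for term in THREATENING_TERMS):
        labels.add("threatening")
    return labels or {"clean"}
```

A lexicon baseline like this is exactly what idiomatic expressions defeat, which is the failure mode the authors report for translation-based detection: an idiom carries threat without containing any listed term.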
I felt numb – not sure what to do. How did deepfake images of me end up on a porn site?
There was an insistent knock at the door. This in itself was startling – it was the winter of 2020 and we hadn't yet returned to socialising indoors after lockdown. When I answered, I was surprised to see a male acquaintance of mine. He said he needed to speak to me. I knew it was something unprecedented because he asked to come in. He told me to sit down. That's when the adrenaline started coursing through me – people only suggest that when they're about to deliver bad news.
- Europe > United Kingdom > Wales (0.04)
- Europe > United Kingdom > England > South Yorkshire (0.04)
Not By AI Badges -- A Badge for Your AI-free Content
The Not By AI badge is created to encourage more humans to produce original content and to help users identify human-generated content. The ultimate goal is to make sure humanity continues to advance. An expert estimates that 90 percent of online content could be generated by AI by 2025. With the surge in AI-generated content, it is important to note that AI is trained on human-generated content. If humans rely solely on AI to generate content moving forward, any new content generated by AI may just be recycled content from the past. This could pose a major obstacle to human advancement. Only by limiting our reliance on AI and continuing to create original content can we propel ourselves forward as a species.
Lewis Silkin - AI 101: The Regulatory Framework
Back in April 2021, the European Commission published its proposal for the Artificial Intelligence Regulation ("AI Regulation"), which is currently making its way through the European legislative process. This draft AI Regulation seeks to harmonise rules on artificial intelligence by ensuring AI products are sufficiently safe and robust before they enter the EU market. The AI Regulation is intended to apply to what the EU terms "AI systems". The most recent iteration of this concept is defined (in summary) as all systems developed through machine learning approaches, and logic- and knowledge-based approaches. This is a wide definition aimed at accommodating future developments in AI technology, but it extends to much of modern AI software. The broad scope of this definition is narrowed by the operational impact of the draft legislation, as the AI Regulation takes a 'risk-based approach' to governing AI systems.
- Law > Statutes (1.00)
- Government > Regional Government > Europe Government (0.69)